
Related work

A variety of efforts have been made to simplify the process of generating 3D models, including the ``idea sketching'' described by Akeo et al. [1]. Their system allows users to scan real sketches into the computer, where they are ``marked up'' with perspective vanishing lines and 3D cross sections. The scanned data is then projected onto the 3D mark-up to complete the process.

Nearly all CAD applications employ some form of 2D sketching, although sketching is rarely used in 3D views. A notable exception is Artifice's Design Workshop [2], which allows cubes and walls to be constructed, and constructive solid geometry (CSG) operations to be applied, directly in the 3D view. However, the overall style of interaction is still menu-oriented, and the set of primitives is small.

The considerable work done in the area of drawing interpretation, surveyed by Wang and Grinstein [28], focuses solely on interpreting an entire line drawing at once. In contrast, we attempt to provide a complete interface for progressively conceptualizing 3D scenes, using aspects of drawing interpretation to recognize primitives from a gesture stream. Viking [20] uses a constraint-based approach to derive 3D geometry from 2D sketches: the user draws line segments, and the system automatically generates a number of constraints that must then be satisfied in order to re-create a 3D shape. The difficulty with such approaches is that even though they are generally restricted to polygonal objects, they are often slow and difficult to implement. In addition, they are often intolerant of noisy input and may either be unable to find a reasonable 3D solution or may find an unexpected one. Branco et al. [5] combine drawing interpretation with more traditional 3D modeling tools, such as CSG operators, in order to simplify the interpretation process; however, their system is limited by a menu-oriented interaction style and does not consider constructing and editing full 3D scenes.
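To give a concrete flavor of such constraint-based reconstruction, the fragment below recovers depths for a single sketched corner by least squares. It is a toy sketch under our own assumptions (orthographic projection, one perpendicularity constraint, hypothetical names), not Viking's actual formulation.

    # Toy sketch of constraint-based 3D reconstruction from a 2D drawing
    # (the general idea only, not Viking's actual algorithm).
    # Vertex A is pinned at depth 0; the depths of B and C are solved for
    # so that sketched edges AB and AC become perpendicular in 3D.
    import numpy as np
    from scipy.optimize import least_squares

    A2, B2, C2 = (0.0, 0.0), (1.0, 0.1), (-0.2, 1.0)   # sketched 2D vertices

    def residuals(z):
        zb, zc = z
        A = np.array([*A2, 0.0])       # orthographic lift: (x, y) -> (x, y, z)
        B = np.array([*B2, zb])
        C = np.array([*C2, zc])
        return [np.dot(B - A, C - A),  # right-angle constraint: AB . AC = 0
                zb - zc]               # symmetry assumption to pick one shape

    sol = least_squares(residuals, x0=[0.5, 0.5])
    print("recovered depths for B and C:", sol.x)

Even this tiny example is nonlinear; real sketches yield many more unknowns and constraints, which is one reason such solvers are slow, sensitive to noise, and prone to unexpected solutions.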

Deering [10], Sachs et al. [22], Galyean and Hughes [11], and Butterworth et al. [7] take a very different approach to constructing 3D models, one that requires 3D input devices as the primary input mechanism. A variety of systems have incorporated gesture recognition into their user interfaces, including Rubine's [21], which uses gesture recognition in a 2D drawing program, but we know of no systems that have extended the use of gesture recognition to 3D modeling.

We also use a variety of direct-manipulation interaction techniques for transforming 3D objects that are related to the work of Snibbe et al. [25] and Strauss and Carey [27]. In addition, we exploit some very simple, flexible constrained-manipulation techniques similar to those described by Bukowski and Sequin [6]. Their system automatically generates motion constraints for an object directly from that object's semantics. For example, when a picture frame is dragged around a room, the frame's back always remains flush with some wall to avoid unnatural situations in which the frame would float in mid-air; similarly, when a table is manipulated, all of the objects on top of it are automatically moved as well.
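The fragment below sketches how such semantics-derived constraints might look in code; it is a minimal idealization under our own assumptions (hypothetical classes, walls modeled as axis-aligned planes), not Bukowski and Sequin's implementation.

    # Minimal sketch of semantics-derived motion constraints (our own
    # idealization, not Bukowski and Sequin's actual code): a dragged picture
    # frame snaps flush to the nearest wall, and moving a table carries along
    # every object resting on it.
    from dataclasses import dataclass, field

    @dataclass
    class Wall:
        x: float                                    # wall plane at x = const

    @dataclass
    class SceneObject:
        name: str
        pos: list                                   # [x, y, z] position
        on_top: list = field(default_factory=list)  # objects resting on this one

    def drag_frame(frame, target, walls):
        """Move a picture frame, keeping its back flush with the nearest wall."""
        frame.pos = list(target)
        nearest = min(walls, key=lambda w: abs(w.x - frame.pos[0]))
        frame.pos[0] = nearest.x                    # snap onto the wall plane

    def drag_table(table, target):
        """Move a table; everything on top inherits the same translation."""
        dx = [t - p for t, p in zip(target, table.pos)]
        table.pos = list(target)
        for obj in table.on_top:
            obj.pos = [p + d for p, d in zip(obj.pos, dx)]

    walls = [Wall(0.0), Wall(10.0)]
    frame = SceneObject("frame", [4.0, 1.5, 2.0])
    drag_frame(frame, (4.2, 1.7, 2.5), walls)       # back snaps flush to x = 0

    book  = SceneObject("book",  [5.0, 1.0, 3.0])
    table = SceneObject("table", [5.0, 0.0, 3.0], on_top=[book])
    drag_table(table, (6.0, 0.0, 4.0))              # the book rides along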

In our system, since we have less semantic information than Bukowski and Sequin, we have less opportunity to automatically generate appropriate constraints; therefore, we occasionally require the user to explicitly sketch constraints in addition to geometry. Our constraint techniques are fast, flexible, and almost trivial to implement, but they are not as powerful as the constrained manipulation described by Gleicher [12] or Sistare [24]. Although Gleicher exploits the fact that constraints always start off satisfied, thereby reducing constraint satisfaction to constraint maintenance, he must still solve systems of equations during each manipulation, a process that is often slow and subject to numerical instability. Other approaches, like Bier's snap-dragging [4], are also related to our constrained manipulation, although we never present the user with a set of constraint choices from which to select.
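As a rough illustration of why such projection-style constraints are nearly trivial to implement (a sketch under our own assumptions, not our system's actual code), each frame of a drag can simply be projected onto a sketched constraint axis, so the constraint holds by construction and no equations need to be solved.

    # Minimal sketch of projection-style constrained manipulation (our own
    # illustration): the drag vector is projected onto a user-sketched
    # constraint axis, so no per-frame equation solving is required and the
    # constraint can never be violated.
    import numpy as np

    def constrained_drag(pos, drag, axis):
        """Translate pos by the component of drag along the unit constraint axis."""
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        return np.asarray(pos, dtype=float) + np.dot(drag, axis) * axis

    pos  = np.array([1.0, 0.0, 0.0])
    axis = [0.0, 0.0, 1.0]                         # sketched single-axis constraint
    print(constrained_drag(pos, [0.3, 0.2, 0.8], axis))  # moves only along z

The trade-off is expressiveness: a fixed projection can only express constraints that form a fixed motion subspace, whereas solver-based constraint maintenance like Gleicher's supports much richer constraint systems.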

Lansdown and Schofield [17] and Salisbury et al. [23] provide interesting techniques for non-photorealistic rendering, although neither system specifically targets interactive rendering.
